5 results for Duplicate tuples

in Aston University Research Archive


Relevance:

20.00%

Publisher:

Abstract:

N-tuple recognition systems (RAMnets) are normally modeled using a small number of input lines to each RAM, because the address space grows exponentially with the number of inputs. It is impossible to implement an arbitrarily large address space as physical memory. But given modest amounts of training data, correspondingly modest numbers of bits will be set in that memory. Hash arrays can therefore be used instead of a direct implementation of the required address space. This paper describes some exploratory experiments using the hash array technique to investigate the performance of RAMnets with very large numbers of input lines. An argument is presented which concludes that performance should peak at a relatively small n-tuple size, but the experiments carried out so far contradict this. Further experiments are needed to confirm this unexpected result.
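The sparsity argument in the abstract can be illustrated with a minimal sketch: each RAM stores only the addresses actually seen in training, in a hash-backed set, rather than a dense 2^n table. The class name, parameters and toy patterns below are illustrative, not from the paper.

```python
import random
from collections import defaultdict

class HashRAMnet:
    """Sketch of an n-tuple classifier whose per-RAM address space is a
    hash-backed set, so large tuple sizes stay feasible in memory."""

    def __init__(self, n_tuples, tuple_size, input_bits, seed=0):
        rng = random.Random(seed)
        # Each "RAM" samples a fixed random subset of input bit positions.
        self.maps = [rng.sample(range(input_bits), tuple_size)
                     for _ in range(n_tuples)]
        # label -> one set of seen addresses per RAM (only set bits stored).
        self.memory = defaultdict(lambda: [set() for _ in range(n_tuples)])

    def _addresses(self, bits):
        for ram, positions in enumerate(self.maps):
            # The tuple of sampled bit values is the RAM's "address".
            yield ram, tuple(bits[p] for p in positions)

    def train(self, bits, label):
        for ram, addr in self._addresses(bits):
            self.memory[label][ram].add(addr)

    def classify(self, bits):
        # Score = number of RAMs whose address was set for that class.
        scores = {label: sum(addr in mem[ram]
                             for ram, addr in self._addresses(bits))
                  for label, mem in self.memory.items()}
        return max(scores, key=scores.get)

net = HashRAMnet(n_tuples=8, tuple_size=4, input_bits=16)
net.train([0] * 16, "all-zeros")
net.train([1] * 16, "all-ones")
prediction = net.classify([0] * 16)
```

Because only trained addresses occupy memory, the storage cost scales with the training data, not with 2^tuple_size, which is what makes the paper's very large n-tuple experiments possible.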

Relevance:

10.00%

Publisher:

Abstract:

As a discipline, supply chain management (SCM) has traditionally been concerned primarily with the procurement, processing, movement and sale of physical goods. However, an important class of products has emerged - digital products - which cannot be described as physical, as they do not obey commonly understood physical laws. They possess neither mass nor volume, and they require no energy in their manufacture or distribution. With the Internet, they can be distributed at speeds unimaginable in the physical world, and every copy produced is a 100% perfect duplicate of the original version. Furthermore, the ease with which digital products can be replicated has few analogues in the physical world. This paper assesses the effect of non-physicality on one such product – software – in relation to the practice of SCM. It explores the challenges that arise when managing the software supply chain and how practitioners are addressing these challenges. Using a two-pronged exploratory approach that combines an examination of the literature on software management with direct interviews with software distribution practitioners, a number of key challenges associated with software supply chains are uncovered, along with responses to these challenges. The paper proposes a new model for software supply chains that takes into account the non-physicality of the product being delivered. Central to this model are the replacement of physical flows with flows of intellectual property, the growing importance of innovation over duplication, and the increased centrality of the customer in the entire process. Hybrid physical/digital supply chains are discussed and a framework for practitioners concerned with software supply chains is presented.

Relevance:

10.00%

Publisher:

Abstract:

Aim: To evaluate OneTouch® Verio™ test strip performance at hypoglycaemic blood glucose (BG) levels (<3.9 mmol/L [<70 mg/dL]) across seven clinical studies. Methods: Trained clinical staff performed duplicate capillary BG monitoring system tests on 700 individuals with type 1 and type 2 diabetes using blood from a single fingerstick lancing. BG reference values were obtained using a YSI 2300 STAT™ Glucose Analyzer. The number and percentage of BG values within ±0.83 mmol/L (±15 mg/dL) and ±0.56 mmol/L (±10 mg/dL) were calculated at BG concentrations of <3.9 mmol/L (<70 mg/dL), <3.3 mmol/L (<60 mg/dL), and <2.8 mmol/L (<50 mg/dL). Results: At BG concentrations <3.9 mmol/L (<70 mg/dL), 674/674 (100%) of meter results were within ±0.83 mmol/L (±15 mg/dL) and 666/674 (98.8%) were within ±0.56 mmol/L (±10 mg/dL) of reference values. At BG concentrations <3.3 mmol/L (<60 mg/dL) and <2.8 mmol/L (<50 mg/dL), 358/358 (100%) and 270/270 (100%) were within ±0.56 mmol/L (±10 mg/dL) of reference values, respectively. Conclusion: In this analysis of data from seven independent studies, OneTouch Verio test strips provide highly accurate results at hypoglycaemic BG levels. © 2012 Elsevier Ireland Ltd.
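The accuracy-band statistics reported above (share of meter readings within ±10 or ±15 mg/dL of reference, restricted to a hypoglycaemic range) reduce to a simple filtered count. A minimal sketch, with invented toy readings rather than the study's data:

```python
def within_band(pairs, band_mgdl, below_mgdl):
    """Count meter readings within +/- band_mgdl of reference, among
    samples whose reference value is below below_mgdl (all in mg/dL)."""
    # pairs: iterable of (meter_reading, reference_value) tuples.
    subset = [(m, r) for m, r in pairs if r < below_mgdl]
    hits = sum(abs(m - r) <= band_mgdl for m, r in subset)
    return hits, len(subset)

# Toy (meter, reference) pairs in mg/dL -- illustrative only.
readings = [(62, 65), (55, 58), (48, 50), (70, 66), (90, 88)]
hits, n = within_band(readings, band_mgdl=10, below_mgdl=70)
pct = 100 * hits / n
```

The last pair (reference 88 mg/dL) is excluded by the hypoglycaemic cutoff, so only the four low-glucose samples enter the denominator, mirroring how the 674/674 and 666/674 counts in the abstract are formed.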

Relevance:

10.00%

Publisher:

Abstract:

A new creep test, the Partial Triaxial Test (PTT), was developed to study the permanent deformation properties of asphalt mixtures. The PTT used two duplicate platens whose diameters were smaller than the diameter of the cylindrical asphalt mixture specimen. One base platen was centrally placed under the specimen and a loading platen was centrally placed on the top surface of the specimen. A compressive repeated load was then applied to the loading platen and the vertical deformation of the asphalt mixture was recorded. Triaxial repeated load permanent deformation tests (TRTs) and PTTs were conducted on AC20 and SMA13 asphalt mixtures at 40°C and 60°C to provide the parameters of the creep constitutive relations for the ABAQUS finite element models (FEMs) built to simulate the laboratory wheel tracking tests. Laboratory wheel tracking tests were also conducted on AC20 and SMA13 asphalt mixtures at 40°C and 60°C, and the rutting depths calculated from the FEMs were compared with the measured rutting depths. Results indicated that the PTT was able to characterize the permanent deformation of asphalt mixtures in the laboratory. The rutting depth calculated using parameters estimated from the PTT results matched the measured rutting more closely than the rutting depth calculated using parameters estimated from the TRT results. The main reason was that the PTT better simulated the changing confinement conditions of asphalt mixtures in the laboratory wheel tracking tests than the TRT did.
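The abstract does not give the fitted creep parameters, but the usual route from a repeated-load test to ABAQUS's time-hardening creep law (strain rate proportional to a power of stress and time) is a log-log linear fit of accumulated strain against time at fixed stress. A hedged sketch with invented data:

```python
import math

def fit_power_law(times, strains):
    """Fit strain ~ c * t**k by least squares in log-log space.
    Suitable for noiseless or lightly noisy creep data at fixed stress."""
    xs = [math.log(t) for t in times]
    ys = [math.log(e) for e in strains]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope of the log-log regression line is the time exponent k.
    k = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    c = math.exp(mean_y - k * mean_x)
    return c, k

# Synthetic permanent-deformation history: strain = 0.002 * t^0.35.
# These numbers are illustrative, not the paper's measurements.
times = [10, 50, 100, 500, 1000]
strains = [0.002 * t ** 0.35 for t in times]
c, k = fit_power_law(times, strains)
```

The fitted pair (c, k) is what a creep constitutive relation in an FEM would consume; fitting it separately to PTT and TRT histories is how the two tests would yield the differing rutting predictions compared in the paper.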

Relevance:

10.00%

Publisher:

Abstract:

Spamming has been a widespread problem for social networks. In recent years there has been increasing interest in the analysis of anti-spamming for microblogs such as Twitter. In this paper we present systematic research on the analysis of spamming on the Sina Weibo platform, currently the dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and to find approaches to identify and block spammers in Sina Weibo based on spamming behavior classifiers. To begin the analysis of spamming behaviors, we devised several effective methods to collect a large set of spammer samples, including the use of proactive honeypots and crawlers, keyword-based searching, and buying spammer samples directly from online merchants. We processed the database associated with these spammer samples and, interestingly, found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting, and aggressive following. We extracted various features and compared the behaviors of spammers and legitimate users with regard to these features, finding that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings we designed an automatic online spammer identification system. Tests with real data demonstrate that the system can effectively detect spamming behaviors and identify spammers in Sina Weibo.
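The three behaviors the paper identifies - aggressive advertising, repeated duplicate reposting and aggressive following - lend themselves to simple per-account features with thresholds. A minimal sketch of that kind of classifier; the feature names, thresholds and toy accounts are illustrative assumptions, not the study's actual system:

```python
def extract_features(account):
    """Compute toy behavior features from an account's posts and counts."""
    posts = account["posts"]
    # Repeated duplicate reposting: fraction of posts that are duplicates.
    dup_ratio = 1 - len(set(posts)) / len(posts) if posts else 0.0
    # Aggressive following: following-to-follower imbalance.
    follow_ratio = account["following"] / max(account["followers"], 1)
    # Aggressive advertising: fraction of posts carrying a link.
    ad_ratio = (sum("http://" in p or "https://" in p for p in posts)
                / max(len(posts), 1))
    return {"duplicate_repost_ratio": dup_ratio,
            "follow_to_follower_ratio": follow_ratio,
            "link_post_ratio": ad_ratio}

def is_spammer(features):
    # Flag an account when at least two behaviors look aggressive.
    # Thresholds are illustrative, not from the paper.
    flags = [features["duplicate_repost_ratio"] > 0.5,
             features["follow_to_follower_ratio"] > 10,
             features["link_post_ratio"] > 0.8]
    return sum(flags) >= 2

spam = {"posts": ["buy http://x.cn"] * 9 + ["hi"],
        "following": 5000, "followers": 20}
legit = {"posts": ["morning", "lunch pic", "great game http://a.com"],
         "following": 150, "followers": 180}
```

Requiring two of three flags is a crude stand-in for the trained classifier the paper describes, but it captures the core finding: spammer and legitimate accounts separate cleanly on these behavioral axes.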